perm filename LENAT.1[LET,JMC] blob sn#841433 filedate 1987-06-12 generic text, type C, neo UTF8
\input buslet[1,ra]
\jmclet
\vskip 30pt
\address 
Prof. Doug Lenat
MCC
3500 West Balcones Center Drive
Austin, Texas 78759
\address
Prof. Ed Feigenbaum
Computer Science Department
Stanford University
Stanford, CA 94305

\body
Dear Doug and Ed:

Thanks, Doug, for the May 5 draft of your ``On the Thresholds of Knowledge''.
I think it will be a substantial contribution to the Foundations of AI Workshop
and more than justifies my persuading David Kirsh to invite it. I would have found 
it much more congenial to comment on it rather than Carl Hewitt's somewhat
inchoate effort. As you might guess, I don't agree with everything in it.

I want to begin by mentioning two matters that I hope you will find time to 
address in the final version of the paper. I'll then continue with comments you
probably won't want to address and finish with misprints.

1. My paper, copies enclosed, ``Some Expert Systems Need Common Sense'', deals
with what you call the Breadth Hypothesis. Concretely, it deals with the fact that 
Mycin doesn't know about events occurring in time, so it can't plan, and
doesn't know that bacteria are organisms. The question I think you should address is
whether it can be told about these matters by adding rules or must be rewritten 
from scratch with a new ontology. If Mycin has to be rewritten, would
newer expert systems also have to be rewritten? Will everything have to be
rewritten every five years? My guess is that a new foundation is needed, but Mycin's
rules could be salvaged and moved more or less intact to the new foundation.
Incidentally, I see how planning can be used in a system with Mycin's goals, but
I am unclear where fundamental bacteriology comes in. Anyway, I hope you will
have a chance to treat the issues raised in my paper.

2. When knowledge from previously unconnected frames must be brought
together it seems to me that logic or some equivalent is clearly required.
Consider what I'll call ``The Horse at the Birthday Party Problem''. We
suppose the system knows about horses, e.g. has a frame with filled slots,
and similarly knows about children's birthday parties, but has no
knowledge connecting the two. It seems to me that this is true of many
people's knowledge of these two subjects.

Now someone proposes to bring a horse to the party and supplement the other
entertainment by giving the children rides. The following interaction between
the frames must be considered.

a. The horse must be suitable considering the ages of the children.

b. Is there a special role for the birthday boy?

c. Should the children all watch while each takes his turn or should the rides
be simultaneous with other entertainment?

d. How should fearful children be handled? Will there be too many of them?

e. Was it a success, and should we have a horse next year? 

I'm not sure the above are quite the right questions to illustrate the point, but
it seems to me that planning the solution of these problems involves the 
conjunction of information from both frames and still others.  Probably you have
some doctrine, but nothing about this appears in your paper.

Here are some more detailed comments on the paper, not all of which
I suppose you will want to address.

page 1 - The need for domain-specific knowledge has always been accepted.
It seems to me that you want to assert something stronger.

page 3 - It seems to me that there are qualitatively different kinds of
knowledge, e.g. the knowledge that bacteria are small organisms that
reproduce is quite different from knowledge of the relation between
symptoms and diagnosis.  It isn't apparent that new kinds of knowledge
can be added without changing the basic formalism.

page 4 - How does one extend the ontology?

page 5 - I suggest spelling out NMR as nuclear magnetic resonance, since
it only occurs once.

page 7 - Say when and why analogy is better than generalization.

page 7 - The reference to ``generalization'' seems to refer
to having a body of common sense knowledge rather than to having
statements about ``all animals''.  Most readers would expect the
latter meaning of ``generalization''.  Clearly both are needed.

page 9 - ``Napoleon'' is misspelled.

page 10 - I doubt Lakoff's view that metaphors are the main problem.
Besides it's often hard to tell whether a metaphor is being used or
whether a technical term has been created by using an old word or
phrase in a technical way.

page 11 - What is the knowledge base about learning?

page 17 occasionaly → occasionally

page 20 - With regard to what is worth more and less,
Doug, have you done the $^{235}$U and $^{239}$Pu examples yet?

page 21 - Both naive physics and your common sense understanding need
to ``seamlessly interface'' with the physicist's quantitative understanding
of water, e.g. density, temperature, specific heat, phase change,
heats of fusion and evaporation.

page 21 - I doubt that the major difficulty in interacting with expert
systems is rigid grammar.  Rather it concerns the semantic
generality of natural language.  Suppose we require the user to
write $(read book)$ to refer to reading a book, but allow this to
refer, according to context, to reading a particular copy of a book,
and to reading the book in general.  When we allow also $(write book)$,
still other semantic generalities become possible.
Wanting a book to read on the airplane opens still other semantic possibilities.

Unfortunately, the above remarks were scrawled notes in a copy of the
manuscript Doug gave me, and I have preferred to send this now rather
than take more time, especially as I now have to write up my comments
on Hewitt.

Again, congratulations on the paper.

\closing
Sincerely,       

John McCarthy    
\annotations
\vskip 1in
%Enclosure
\vskip 1in
JMC/ra 
\endletter
\end